Living the Simulation
From Carbon-Based Wetware to Silicon Hardware
Joseph P. McFadden Sr.
Workshop — Slide Companion Reader
Note: Although I created this presentation for distribution within Zebra Technologies, nothing discussed here is proprietary; the content and themes go beyond any individual company or industry. As an educator, my mission is to share this material with all curious people: learning to learn, and sharing what I find.
Living the Simulation
[Slide 1] I'm Joe McFadden. Engineering Fellow at Zebra Technologies. And I want to start by telling you something that sounds like a science fiction premise — but is actually documented neuroscience. You live in a simulation right now. Not the Matrix kind. Something far more immediate, far more real, and frankly, far more mind-blowing. By the end of this session, I think you'll agree.
[Slide 2] This workshop has five parts. We start with the Why — my journey, what I found, and why it matters to engineers. Then we get into Living the Simulation — the neuroscience of how your brain actually works. From there, Getting Under the Hood — two tools I built to open the black box of simulation. Then a brief introduction to Holistic Simulation — Design, Material, Process, and Tooling as a unified system. And we close with the Series Roadmap and open discussion.
One requirement to be here: curiosity. That's it. That's the only prerequisite.
[Slide 3] Before we dive in, I want you to see the full picture. Today is session one — the foundation. Coming up: a Simulation Deep Dive into plastics, metals, and glass. Then Materials — understanding what we're actually modeling. Then Process — mold fill, die casting, process-driven behavior. And session five: Failure Investigations — reading the story the data is telling us. Each one builds on the last. Each one matters.
*
THE WHY
Why I Do This
[Slide 4] Before we talk about tools, or techniques, or simulation software — we need to talk about why. Why I started this series. Why it matters beyond Zebra. And why — if I'm right — it matters to every engineer who builds things that need to work in the real world.
[Slide 6] Take a moment. Really. Ask yourself: why am I here? What brought me to this room — or this screen — today? What does growth look like in my day-to-day work? Hold that question. We'll come back to it. Because everything in this series is built around it.
[Slide 5] I'm not claiming to be an expert on anything. I say that and I mean it. Expertise is an end point — and I don't believe in end points. What I can do is share some of my lived experience and let you gauge who I am from there.
Forty-six years in flow simulation, structural and thermal analysis. Expert witness on cases involving failure and loss of life — the most notable a submersible that imploded in Lake Michigan. Twenty-plus years as an educator — fracture mechanics, mechanics of materials, finite element methods. Deep in plastics, metals, glasses, composites, and fiber behavior. And most recently, a partnership with AI — not to outsource thinking, but to amplify it. Reworking decades of lectures, investigations, and case studies into new form. Sharing them. That is what this is.
[Slide 9] My why for today and for this series: to connect our most advanced engineering tools back to the first principles of human learning. How the environments we create shape the mental models we build. And how mastering that process is how we actually Lead through Innovation — not as a slogan, but as a practice.
The Observation
[Slide 7] Over decades of teaching — in academia, in industry, and in the school of hard knocks — I began noticing something that genuinely troubled me. A slow, quiet decline in curiosity. The willingness to sit with a hard question, resist the easy answer, and think critically through to something real — it was fading. Not dramatically. Just steadily. The decline was real. Measurable. And accelerating.
A five-year gap in teaching mechanics of materials gave me a natural experiment. I came back with the same exams from five years earlier. These students could not handle them. Not because the exam got harder. The material hadn't changed. The gap itself was the variable. Five years of something had shifted — and I was the only one who could see it clearly, because I had walked away and come back.
So I went looking for the mechanism. I did what I do. I got into the neuroscience. I ended up making Jeff Bezos a little richer buying books. Neuroscience books, evolutionary psychology books, anything that would tell me how we actually learn. How we evolved. And what was eroding.
What I Did About It
[Slide 8] Research. Learn. Develop. Validate. Share.
I dove into neuroscience — predictive coding, the free energy principle, the evolutionary roots of curiosity. Built on decades of reading Jung, Nietzsche, Solzhenitsyn, and beyond. Translated that research into teaching methods — active engagement, productive struggle, restoring the prediction-error loop that passive learning removes. Then I went to AI as a Socratic partner — Claude, Grok, Gemini, Perplexity, ChatGPT — stress-testing ideas, sharpening arguments. Not to outsource thinking. To sharpen it.
And I built tools. CAE_INP — Input Navigator Pro. And MDSP — Mechanical Digital Signal Processing Plus. Built not to do the work for you. Built to crack open the black box. To make engineers better partners to their simulations. Drivers. Not just consumers of outputs.
*
THE PROVOCATIVE TRUTH
You Evolved to Be Intellectually Lazy
[Slide 10] Of everything that research revealed — one idea unlocked the rest. Once I understood it, the confusion I had felt for years — about why learning was hard, for me and for my students — finally made sense. It didn't just explain the problem. It pointed to the solution. The next idea is that idea.
[Slide 11] You evolved to be intellectually lazy. That's why we simulate.
Hold on — before you react. That is not an insult. That is a biological fact. And understanding it — really understanding it — is what makes the rest of this presentation land.
[Slide 12] Your brain has an energy budget. Processing every sensory input from scratch, every millisecond of every day, would be metabolically catastrophic. So it doesn't.
Instead it predicts. It runs a shortcut — a simulation — of what's likely coming next, checks it against reality, and only processes the difference. Lazy? Yes. Brilliant? Absolutely. Because prediction is orders of magnitude cheaper than perception. And in a world where energy means survival, cheap is everything.
[Slide 13] When your brain's model is accurate — when reality matches the prediction — incoming data costs almost nothing to process. Just a quick confirmation. World as expected. Continue. When the model is wrong — when something doesn't match — the prediction error fires hard. That's the learning signal. That's the update. Pay attention. Revise. Intellectual laziness, precisely directed, is called intelligence. That one sentence is worth sitting with.
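The confirm-or-revise loop above can be sketched in a few lines of code. This is a toy illustration of the idea, not a model of any real neural circuit; the temperatures and learning rate are numbers I chose for illustration:

```python
def predictive_update(estimate, observation, learning_rate=0.2):
    """Revise an internal model by a fraction of the prediction error."""
    error = observation - estimate           # the mismatch: cheap when small
    return estimate + learning_rate * error  # update only by the difference

estimate = 20.0  # prior model: expected room temperature in degrees C
for observation in [20.1, 19.9, 25.0, 25.0, 25.0]:
    estimate = predictive_update(estimate, observation)

# The first two readings cost almost nothing: tiny errors, tiny updates.
# The jump to 25.0 fires a large error, and the model revises hard toward it.
```

Small errors are the quick "world as expected, continue" confirmations; the big error is the "pay attention, revise" signal.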
[Slide 14] We do the same thing externally. We build models of physical systems and run them forward — because poking the real thing every time is slow, expensive, and sometimes fatal. You can't poke a bridge to find out if it collapses under load. You can't poke a human heart to see how it responds to a new valve. So we build virtual versions and poke those instead.
Simulation is the brain's lazy shortcut, scaled up and made precise in silicon. Same instinct. Different substrate. Same purpose.
[Slide 15] This is not an insult. It is a condition we all share. When you feel frustrated, confused, or tempted to stop — that is your brain sending a signal. It is doing exactly what it evolved to do: spending energy is expensive, so it pushes back. Most people read that signal as failure. They stop — and the learning stops with them. But when you recognize the signal for what it is, you gain something powerful: a choice. The same choice a person in a gym faces when the burn starts. Quit, or push through and grow. Awareness doesn't silence the signal. It makes you the one who decides what happens next.
*
CONTEXT
This Is Not Just About Engineering
[Slide 16] What I'm about to introduce may look like engineering topics, or Zebra-specific content. It is not.
The principles we'll discuss — prediction, simulation, curiosity, model-building, the cost of passive learning — are not engineering constructs. They are human ones. They govern how you learn. How you make decisions. How you raise children. How you stay sharp as you age. How you navigate uncertainty in every domain of your life.
[Slide 17] This is not Joe hyperbole. It is grounded in neuroscience and evolutionary psychology — fields that have spent decades mapping exactly how the brain builds models, updates them, and degrades when it stops. The engineering examples are the vehicle. Understanding your own brain is the destination.
[Slide 18] At work: how you approach problems, build expertise, and grow your predictive ability over time — or outsource it to the tool, the shortcut, the easy answer, and quietly decline. At home: how children learn, and what passive consumption does to that. How habits form. How relationships build shared models of the world, or drift apart when those models stop updating. In yourself: how you stay curious. How you maintain the productive struggle that keeps the brain rewiring toward mastery. How you recognize the difference between rest and stagnation.
[Slide 19] My discussions on these matters will be brief — enough to open the door, not to walk you through every room. What you do with that door is entirely up to you. I'm simply here to make sure you know the door exists.
*
OUR MISSION
The Mission and the Path to Quality
[Slide 20] Why are you here? May I suggest an answer: personal and professional growth — in service and support of Zebra's mission.
[Slide 21] Zebra's mission: to empower those on the front line — in retail, e-commerce, transportation and logistics, manufacturing, healthcare and other industries — to achieve a performance edge. An edge that translates to delighted customers, good patient outcomes, and superior business results.
[Slide 22] Every simulation we run, every thought experiment we use to improve a design while reducing cost, every what-if we envision to optimize how we deliver — each of these deepens our understanding of how physical and operational systems actually behave. That knowledge doesn't stay in the software. It flows back into the design, the process, the service model. And ultimately into the product that our partners rely on. That is the connection between personal growth and Zebra's mission. Being here, asking those questions — that is personal growth in the truest sense.
[Slide 23] Three words from that mission deserve attention. Empower the front line — not just supply it, empower it. The people doing the actual work, in their hands, every day. A performance edge — not just tools, a competitive, measurable advantage. And delighted customers — not satisfied, not served, delighted. Delighted customers start with quality products. Quality starts with creativity and innovation. And creativity and innovation start within each of us.
[Slide 24] Partners, not customers. That is Zebra's core relationship with those we serve. Partnership changes the standard. A customer you satisfy. A partner you empower. That's a fundamentally different design target.
And what is quality? Not the mundane. Not just meets-spec. Quality is not something that merely persists; it allows the user to fulfill, if not advance, their goal. Fostering, not impeding. Quality does not just happen. Creativity and innovation are the driving force — and they come from within each of us.
[Slide 25] The chain is simple and it's important. We make partners happy by delivering solutions in physical devices. Those devices must be reliable — when a device helps a partner reach their goal every time, without obstruction, it is quality. Quality doesn't happen by accident. It is the product of creative thinking applied through disciplined innovation. That's where CAE lives. But before we get into CAE — let's follow the chain back to its origin.
The Chain: Survival to Quality
[Slide 26] So where does quality come from? The short answer is obvious — it comes from within us. Within each of us. But let's follow the chain carefully, because the chain is everything.
[Slide 27] True understanding requires seeking first principles. And the first principle behind all of this — behind curiosity, creativity, innovation, and quality — is survival. Our ancestors didn't explore out of leisure. Every unexplored territory, unfamiliar plant, and strange sound could mean survival or death. Curiosity reduced that uncertainty. Evolution encoded it — the organisms that survived weren't just the strongest. They were the ones who kept asking what's behind that rock. Natural selection favored the curious brain. And innovation is just navigation — learning to move through a world that resists us. To understand why we build simulations, we first need to understand what we are.
[Slide 28] Step one. Survival. The non-negotiable baseline of all biology. Every organism alive today descends from an unbroken chain of survivors — beings who outcompeted, outlearned, and outlasted the dangers around them. What did survival demand above all else? Information. The organisms that could gather, process, and act on information about their environment outlived those that couldn't. Over millions of years, natural selection built a reward circuit for information-seeking directly into the brain. That circuit is curiosity. Survival built it into us.
[Slide 29] Step two. Curiosity. Not a personality quirk. A survival circuit. The brain releases dopamine when we explore, discover, and resolve unknowns. That aha moment you feel when something clicks — that is not random happiness. That is your brain confirming that a prediction was resolved. That a model was updated. That is the learning signal firing. A curious mind doesn't just accumulate knowledge. It begins to see relationships between pieces of knowledge that no one else has connected yet. Curiosity fills the tank. Creativity fires the engine.
[Slide 30] Step three. Creativity. Not magic. Pattern recognition across domains. The brain sees that one thing behaves like another and asks: what if we applied one to solve the other? Our ancestors didn't invent from nothing — they recombined what they already knew in ways no one had tried before. Creativity generates the hypothesis. Innovation runs the experiment. Creativity is the spark. Innovation is the fire that follows when that spark meets fuel.
[Slide 31] Step four. Innovation. The act of translating a creative idea into a solution that works in the real world. Not just invention — innovation means something actually functions, solves a real problem, and survives contact with reality. The first spear. The first boat. The wheel. Every innovation in human history was tested against the world — and most early versions failed. That failure was not the opposite of innovation. It was the engine of it.
[Slide 32] Step five. Quality. The state where a solution doesn't just function — it anticipates needs, eliminates friction, and empowers the user to exceed their own goals. Quality is felt as the absence of obstruction. At Zebra: when quality is right, the device disappears. The partner advances. They don't think about the tool. They just do their job better. And the full circle: quality is not the end of the chain. It feeds back into survival — for our partners, for Zebra, for us.
*
SIMULATION
What Is Simulation, and Where Did It Come From?
[Slide 33] A simulation is a model of reality used to understand, predict, or test outcomes — without the cost, risk, or time of doing it for real. Every time you imagine what might happen before it does, you are simulating. It is not a software feature. It is a way of thinking.
Where did simulation come from? Not computers. It started with the first organism that could predict the outcome of an action before taking it. The brain may be the original simulation engine. We evolved the capacity to run internal simulations long before we had silicon to run them externally.
And why? Because reality is expensive to test. Failure in the real world can be catastrophic, irreversible, or fatal. Simulation gives us the ability to ask what if safely — to explore, to iterate, to fail cheaply — before we commit to the physical world. Of all species on Earth, we may be nature's best simulators.
[Slide 34] Does thinking of your brain as a prediction engine change anything about how you approach your work? Have you ever caught yourself running a mental simulation — playing out a scenario before acting on it? If we are natural simulators, what does that tell us about why CAE tools feel intuitive to engineers? These are not rhetorical questions. Hold them.
*
THE NEUROSCIENCE
Your Brain Is Running a Simulation Right Now
[Slide 35] You live in a simulation. Not the Matrix kind. The simulation your brain is running right now. Every millisecond. Everything you see, hear, and feel is your brain's best prediction of reality — not reality itself. This is not philosophy. This is neuroscience. This is how your brain actually works.
[Slide 36] Three mechanisms make this work.
Predictive coding: your brain generates top-down predictions about reality — every sound, sight, and sensation — before sensory data even arrives. It's not waiting for the world to tell it what's happening. It's guessing — and then checking.
Prediction errors: when reality doesn't match the prediction, the mismatch gets amplified. Your brain says: pay attention — update the model. This is learning. This is literally what learning is.
The free energy principle: Karl Friston — one of the most cited neuroscientists alive — gave us the mathematical framework. The brain is a machine designed to minimize surprise. It runs a predictive simulation of the world — constantly refined. You don't experience reality. You experience your brain's running model of it.
Free Energy, Entropy, and Why We Simulate
[Slide 37] Karl Friston's framework tells us the brain has one master drive: minimize free energy. Free energy is the gap between what the brain predicts and what it senses — a measure of surprise, of uncertainty, of things not yet understood. Every prediction, every model update, every simulation run is the brain working to collapse that gap. This is not metaphor. This is the mathematical engine underneath every act of perception, action, and learning.
Entropy is the opponent. The second law of thermodynamics is undefeated — every system naturally moves toward disorder. For a living system, equilibrium is death. Life is the continuous, active, energy-consuming effort to hold entropy at bay. The brain's prediction machinery is one of evolution's most powerful anti-entropy tools.
In engineering the stakes are identical. A device that fails in the field has reached equilibrium. Every simulation — thermal, structural, flow — is free energy minimization in silicon, postponing entropy for one more cycle. The thermal analysis that catches a hot spot. The FEA that finds the stress concentration. The flow simulation that reveals the weld line before the mold is cut. Each one postponing entropy for one more cycle.
This is not philosophy. This is thermodynamics. And it is why we simulate.
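To make "minimize surprise" slightly more concrete: in this framework, surprise is commonly quantified as the negative log-probability of what you actually observe under your current model. A toy numerical sketch, assuming a simple Gaussian model (the numbers are illustrative only):

```python
import math

def surprise(observation, mean, sigma=1.0):
    """Negative log-likelihood of an observation under a Gaussian model."""
    return (0.5 * math.log(2 * math.pi * sigma**2)
            + (observation - mean) ** 2 / (2 * sigma**2))

# Reality delivers 7.0. A poor model predicted 2.0; a refined model, 6.5.
poor = surprise(7.0, mean=2.0)
refined = surprise(7.0, mean=6.5)

# The refined model assigns higher probability to what actually happened,
# so its surprise is far lower. Updating the model is exactly the
# "collapse the gap" move described above.
```

The better the model predicts, the less the observation costs; that is the free-energy logic in one arithmetic step.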
Why We Poke — Curiosity as Survival
[Slide 38] To predict, you need a model. And to build a model, you need to poke. Always. From birth.
Watch a baby with a brand new toy. They don't just look at it. They shake it. Drop it. Bang it. Taste it. Rotate it. Not random play — systematic empirical investigation. They are building a model of the world, one poke at a time. Every scientist, every engineer, every curious person does the same thing. It's biological. It never stops.
But physical poking is slow, expensive, and sometimes fatal. So the brain evolved a workaround: run the test inside your head before running it in the world. This is the origin of all simulation — not in computers, not in engineering software, but in the predictive brain's need to test hypotheses without risking actual harm. Curiosity is not a luxury. It's survival. Evolution favored brains that reduce uncertainty.
To Predict, You Need a Model
[Slide 39] A model is your brain's best current representation of how something in the world works — built from everything you have experienced, tested, observed, and updated. Without a model, prediction is impossible. With a poor model, prediction is dangerous. Prediction is only as good as the model behind it. The model is only as good as the poking that built it. Poking never stops — and neither does the model.
[Slide 40] Where does the model come from? From lived experience. Every observation. Every failure. Every success. Every conversation. Every course. Every poke. The richness of your model is proportional to the richness of your experience. You cannot shortcut this. You can't copy someone else's model. You can't buy it. You build it — through experience, through struggle, through the slow accumulation of pokes that went the way you expected and pokes that didn't.
[Slide 41] Is the model ever finished? No. A model that stops updating is a model that starts failing. Every new poke either confirms what you know or teaches you something you didn't. The goal is not completion. The goal is continuous refinement.
[Slide 42] Every deliberate poke is a hypothesis test. Expected results confirm a rule. Unexpected results reveal a gap. Over a career, this accumulates into something that cannot be taught in a classroom: a rich, textured, battle-tested library of models. That library is your most valuable engineering asset. Not your software licenses. Not your certifications. That internal library — built through decades of deliberate poking — is what makes your judgment irreplaceable.
[Slide 43] Think of your model library as a dynamic living document. It is never finished — every experience adds a page, every failure rewrites one. It is cross-domain — a pattern from one system illuminates another, and that illumination is creativity. And it compounds — the longer you poke, the faster new experiences find a home. The engineer with thirty years of pokes doesn't just know more. They think differently. They pattern-match faster. They catch what others miss.
From Wetware to Hardware
[Slide 44] The brain is extraordinary — but it has hard limits. Working memory caps at roughly seven items at once. Processing degrades with fatigue. Arithmetic is approximate at best. You can't track millions of variables simultaneously.
So we built machines that don't have those limitations. Computers hold millions of variables. They run without fatigue or drift. They're perfectly precise. They explore entire parameter spaces overnight.
When an engineer builds a finite element model and runs a structural analysis, they are doing exactly what the brain does — only now at a scale and precision neurons alone cannot reach. FEA stress contours are prediction errors made visible. The substrate changed from neurons to silicon. The process is identical. We extended what we are — we didn't replace it.
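For a concrete taste of what "finite element analysis" means mechanically, here is a minimal one-dimensional example: two springs in series assembled into a global stiffness system K u = f and solved for displacements. The stiffness and load values are arbitrary, chosen only to illustrate the method:

```python
import numpy as np

k1, k2 = 100.0, 50.0  # spring stiffnesses (N/mm), illustrative values

# Global stiffness matrix for nodes 0-1-2 (standard 1D spring assembly)
K = np.array([[ k1,     -k1,      0.0],
              [-k1,  k1 + k2,    -k2],
              [ 0.0,    -k2,      k2]])

f = np.array([0.0, 0.0, 1.0])  # 1 N applied at the free end (node 2)

# Apply the fixed boundary condition at node 0 by reducing the system
u = np.zeros(3)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

# Springs-in-series intuition: u2 = F/k1 + F/k2 = 0.01 + 0.02 = 0.03 mm
```

A commercial FEA solver does exactly this, scaled to millions of degrees of freedom: assemble element stiffnesses, apply boundary conditions, solve, then paint the result as contours.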
The Through-Line
[Slide 45] There is no fundamental break in this chain. Physics: cause and effect. Biology builds on physics — cells, DNA, evolution. Brains build on biology — neurons, prediction, learning. Technology builds on brains — silicon, FEA, simulation.
It's not turtles all the way down. That old joke describes a foundation that never arrives — an explanation that explains nothing. Our chain has a foundation: physics. And it has a single organizing principle running through every level above it: prediction. That's not turtles. That's a spine.
Don't treat simulations as magic black boxes. Understand what they're doing. Question their assumptions. Test their predictions against reality. They're tools to extend your thinking — not replace it.
[Slide 46] The brain is the original simulation engine. Silicon is just how we turned up the volume.
*
ENGINEERING MIND BLINDNESS
The Black Box
[Slide 47] Geometry, material, process, loading — go into the box. Stress, deflection, failure, pass-fail — come out of the box. And most people stop there. They accept the colors on the screen. They accept the number. They move on.
That is mind blindness. It happens when we accept the outputs without questioning what's happening inside. When the prediction error never fires because we never compare the output to our own model. When we outsource the judgment to the software — and forget to bring it back.
Let me be specific about what this looks like. I've watched it happen too many times over decades in labs, factories, and classrooms. Brilliant, highly educated teams shipping products that fail in the field. And the culprit isn't stupidity or lack of effort. It's silo-smart, mechanism-thin thinking. You optimize one part brilliantly while the system quietly breaks. You follow the spec to the letter but ignore how the device will actually be used. You trust the simulation output and never dig for the underlying cause.
The symptoms are recognizable once you know what to look for. We test to laboratory standards — but not to real service reality. We chase easy measurements instead of leading indicators that actually predict failure. We treat outputs as truth without questioning the assumptions baked into the model.
And the paradoxes are the ones that should stop us cold. You add a bigger heat sink — and field temperatures get worse because you changed the airflow path. A gentle immersion test passes — but localized field exposure causes failure because of kinetics and hot spots you never modeled. One microservice runs faster — but the overall product slows down due to queuing and hidden dependencies you never mapped. These are not rare edge cases. They are symptoms of treating complex systems as collections of independent parts instead of an interdependent whole.
Open It
[Slide 48] The ultimate tool to decode any black box isn't a software patch. It's you. Your curiosity. Your lived experience. Your willingness to look closely. This is not a small thing. It's the difference between an engineer who runs the simulation and an engineer who understands it. Between a pass-fail number and a decision grounded in judgment. Between a tool that serves you and one that replaces you.
The antidote to mind blindness isn't more computation or bigger models. It's a shift in how we think. Map the whole system — Design, Material, Process, Tooling, and everything around them. Nothing lives in isolation. Go mechanism-first — understand cause and effect before you trust the dashboard. Design out concentrators and residuals — reduce peaks and hidden stresses, not just averages. Test like the field — realistic duty cycles, localized loads, full interactions. Use leading indicators — energy, flow, strain — not just pass-fail. And close the loop — pre-mortems, field-to-design feedback, relentless cross-discipline reviews.
Mind blindness doesn't happen because engineers are careless. It happens because the modern workflow — fast simulations, automated reports, pressure for speed — quietly rewards checking the box over understanding the story the data is telling. Every failure has a story. Understanding that story is the key to prevention.
*
GETTING UNDER THE HOOD
Your Tools for Opening the Black Box
[Slide 49] Two tools. Both built around the same principle: don't just run the simulation — understand it.
[Slide 50] CAE_INP — Input Navigator Pro. A production-ready learning tool that gives engineers deeper visibility into simulation models. INP opens the black box — letting you see, question, and understand the inputs that drive every analysis before you accept the outputs. Better inputs, better predictions, better product.
INP is built to give engineers — designers, program managers, anyone working with simulation results — deeper visibility into the model that produced those results. Every simulation starts with an input file. That file contains every decision the analyst made: the materials, the boundary conditions, the element types, the contact definitions, the loading setup. Most of those decisions are invisible in the results. You see the stress plot. You don't see what built it. INP changes that. It lets you look at the model and say: what was actually modeled here, and does it match what I asked for? That question alone — asked consistently — catches more problems than any algorithm.
[Slide 51] In an over-molding simulation, when two parts are modeled as bonded together, the surfaces need to actually be tied in the model. If they're not, one part can be floating inside another. The analysis runs. The colors look fine. And the result is garbage. INP identifies those surfaces, reports the percentage of coincident coverage, and flags partial bonds that could invalidate the result. This is the kind of thing you don't catch by looking at the output. You catch it by looking at the input.
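One way such a coverage check can be sketched is below. This is an illustrative algorithm, not INP's actual implementation; the node coordinates and tolerance are made up for the example:

```python
import numpy as np

def coincident_coverage(nodes_a, nodes_b, tol=1e-3):
    """Fraction of nodes_a lying within tol of some node in nodes_b."""
    hits = 0
    for p in nodes_a:
        # Distance from this node to every node on the mating surface
        d = np.linalg.norm(nodes_b - p, axis=1)
        if d.min() <= tol:
            hits += 1
    return hits / len(nodes_a)

a = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0], [3.0, 0, 0]])
b = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0]])  # one partner missing

coverage = coincident_coverage(a, b)  # 0.75: a partial bond worth flagging
```

Anything short of full coverage means part of the "bonded" interface is not actually tied, which is exactly the floating-part failure mode described above.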
[Slide 52] When you run a drop analysis, the input file doesn't say I'm dropping this from six feet onto this surface — it says here is a velocity applied in this direction. INP back-calculates that to a drop height and a drop angle, then displays it — center of mass, velocity vector, hit surface, nearest node — so you can see at a glance whether the setup is what you intended. Did the center of mass go where you think it went? Is the impact surface correct? Is the angle right? These are questions that should be asked every time. INP makes asking them fast.
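The back-calculation described above is simple kinematics: equating kinetic and potential energy for a free fall gives v = sqrt(2gh), so h = v^2 / (2g). A sketch, using an illustrative six-foot (about 1.83 m) drop:

```python
import math

g = 9.81  # m/s^2, standard gravity

def velocity_from_drop_height(h):
    """Impact velocity (m/s) after a free fall from height h (m)."""
    return math.sqrt(2 * g * h)

def drop_height_from_velocity(v):
    """Equivalent free-fall drop height (m) for an impact velocity v (m/s)."""
    return v**2 / (2 * g)

v = velocity_from_drop_height(1.83)  # ~6 ft drop -> roughly 6 m/s at impact
h = drop_height_from_velocity(v)     # round-trips back to 1.83 m
```

So when the input file only says "velocity 6 m/s in this direction," the drop height it implies is one line of arithmetic away, and checking it should be routine.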
[Slide 53] One of the most common — and most consequential — mistakes in simulation is wrong material properties. Not dramatically wrong. Subtly wrong. A density in the wrong unit system. A modulus that doesn't match your actual part geometry. A plastic curve pulled from a generic database and never verified. INP lists every material in the model, shows you where it's applied, and plots the actual property curves. If those curves don't match your physical understanding of the material — that is a prediction error. Catching it before the analysis runs is far better than catching it after the product fails.
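As an illustration of the unit-system trap: in a tonne/mm/s system, steel's density is about 7.85e-9 tonne/mm^3; typed in as 7850, it is wrong by roughly twelve orders of magnitude and every dynamic result becomes garbage. The sanity check below is a hypothetical helper, not an INP feature, and the plausibility ranges are numbers I chose for illustration:

```python
# Plausible density ranges per unit system (foam-to-heavy-metal span,
# illustrative bounds only -- tune for your own material library)
PLAUSIBLE = {
    "SI (kg, m, s)":  (10.0, 25000.0),   # kg/m^3
    "tonne, mm, s":   (1e-11, 2.5e-8),   # tonne/mm^3
}

def density_looks_plausible(value, system):
    """Return True if the density is within range for the unit system."""
    lo, hi = PLAUSIBLE[system]
    return lo <= value <= hi

ok_si = density_looks_plausible(7850.0, "SI (kg, m, s)")   # plausible steel
ok_mm = density_looks_plausible(7850.0, "tonne, mm, s")    # flags an error
```

The check is trivial, but it fires the prediction error before the solver runs instead of after the product fails.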
[Slide 54] MDSP — Mechanical Digital Signal Processing Plus. Digital poking at scale. Transforming the ancient biological drive to test and explore into a structured, repeatable engineering workflow. Baby with a toy: shake it, drop it, squeeze it. Engineer at a bench: apply load, measure response, iterate. MDSP: simulate poke, analyze signal, optimize. Same drive. Different resolution. Both tools include comprehensive built-in learning centers.
[Slide 55] MDSP handles the poke, the system, and the response. Load accelerometer data from the field, from slam sticks, from test rigs — or generate a signal synthetically if you're learning the tool without real data. Time-domain analysis, FFT, power spectral density, key statistics — all with guidance on what each one means and when to use it.
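The FFT-to-PSD step can be sketched with a simple periodogram on a synthetic signal. MDSP's own estimators may differ (windowing, averaging); this shows only the underlying math, on a made-up two-tone "vibration" signal:

```python
import numpy as np

fs = 1000.0                       # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)     # 2 seconds of samples

# Synthetic "accelerometer" data: 50 Hz and 120 Hz tones plus broadband noise
rng = np.random.default_rng(0)
x = (1.0 * np.sin(2 * np.pi * 50 * t)
     + 0.5 * np.sin(2 * np.pi * 120 * t)
     + 0.1 * rng.standard_normal(t.size))

n = x.size
X = np.fft.rfft(x)                     # one-sided spectrum
freqs = np.fft.rfftfreq(n, 1 / fs)     # frequency axis, 0 .. fs/2
psd = (np.abs(X) ** 2) / (fs * n)      # periodogram, units^2/Hz
psd[1:-1] *= 2                         # fold negative-frequency power in

peak_hz = freqs[np.argmax(psd)]        # dominant tone: the 50 Hz component
```

Time trace in, frequency content out: the PSD tells you where the energy lives, which is the question every vibration standard is framed around.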
[Slide 56] MDSP includes a library of fifty-seven industry standards across aerospace, automotive, military, consumer wearables, and more. Take your measured PSD — your actual product's real-world vibration environment — and compare it directly to the standard that governs your application. Are you within MIL-STD-810H? NASA GEVS? Your customer's spec? You can see it directly. Transmissibility analysis shows you how energy flows through the system — what gets amplified, what gets absorbed, where the structure is vulnerable.
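Transmissibility has a classic closed form for a single-degree-of-freedom system under base excitation. The sketch below is textbook vibration theory, not necessarily MDSP's internal method; the damping ratio is an illustrative value:

```python
import math

def transmissibility(r, zeta):
    """|T| for frequency ratio r = f/fn and damping ratio zeta (SDOF, base excitation)."""
    num = 1 + (2 * zeta * r) ** 2
    den = (1 - r**2) ** 2 + (2 * zeta * r) ** 2
    return math.sqrt(num / den)

t_res = transmissibility(1.0, 0.05)  # at resonance: ~10x amplification
t_iso = transmissibility(3.0, 0.05)  # well above resonance: isolation, < 1
```

Amplified near resonance, absorbed well above it: that curve is the map of what gets magnified, what gets isolated, and where the structure is vulnerable.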
The goal was never to automate the poking. The goal was to make better pokers. Engineers who understand what their tools are actually doing — so that every simulation run makes them sharper, not more dependent.
*
HOLISTIC SIMULATION
Design · Material · Process · Tooling — Unified
[Slide 57] Design. Material. Process. Tooling. Unified.
This is what I mean by holistic simulation. The geometry you design drives the flow. The flow determines fiber orientation. Fiber orientation controls the anisotropic material properties. Those properties determine whether your structural analysis means anything — or is meaningless. Pull one thread and the entire fabric changes. None of these can be treated in isolation.
The principle I carry into every case, every course, every simulation I run: you cannot process out a bad design. You can, however, process out a good one.
When you truly practice holistic simulation — when you map the whole system and go mechanism-first — something shifts. You stop optimizing parts and start understanding wholes. You stop trusting outputs and start interrogating inputs. You become a better predictor. A better engineer. And frankly, a better thinker.
When you combat engineering mind blindness, you don't just build better products. You restore the curiosity and predictive power that evolution gave us in the first place. You turn simulation from a black box into a teacher. And that — more than anything else — is how we deliver the quality our partners deserve.
*
THE ROAD AHEAD
What This Means for You
[Slide 58] The series continues. Session two: Simulation Deep Dive — plastics, metals, and glass. Session three: Materials — understanding what we're actually modeling beneath the surface. Session four: Process — mold fill, die casting, process-driven behavior and what it does to your results. Session five: Failure Investigations — reading the story the data is telling us, and understanding what went wrong before it happens again. Each session builds on the last. Curiosity is still the only prerequisite.
[Slide 59] Now it's your turn. What surprised you most today? Where do you feel most mind blind in your current work? What would make INP or MDSP most useful in your workflow? What topics in this series are you most curious about?
Come ready to think differently. I mean that literally — thinking differently is not a metaphor here. It is the outcome. It is the goal. And it is available to every person in this room.
Recognize what you are doing when you simulate: you are asking a question, testing a hypothesis, building a better model, reducing uncertainty. Keep poking — thoughtfully. Keep questioning. Keep simulating — mentally and computationally. Curiosity is the only prerequisite.
Every failure tells a story, and understanding that story is the key to prevention.
[Slide 60] Thank you for spending your time with me. That is not a small thing. Time is the only resource you can't get back. I don't take it lightly.
Keep thinking. Keep questioning. Keep simulating.
Joseph P. McFadden Sr.